4 research outputs found

    Evaluation of Web vulnerability scanners based on OWASP benchmark

    Web applications have become an integral part of everyday life, but many of them are deployed with critical vulnerabilities that can be fatally exploited. Web vulnerability scanners have been widely adopted to detect vulnerabilities in web applications by probing them from an attacker's perspective. However, studies have shown that vulnerability scanners differ in how well they detect vulnerabilities. Furthermore, the effectiveness of some of these scanners has become questionable given the ever-growing number of cyber-attacks that exploit undetected vulnerabilities in web applications. To evaluate the effectiveness of these scanners, practitioners often run them against a benchmark web application with known vulnerabilities. This thesis first presents our results on the effectiveness of two popular web vulnerability scanners based on the OWASP benchmark, a benchmark developed by the Open Web Application Security Project (OWASP), a prestigious non-profit web security organization. The two scanners chosen in this thesis are OWASP Zed Attack Proxy (OWASP ZAP) and Arachni. Since there are many categories of web vulnerabilities and we cannot evaluate scanner performance on all of them due to time limitations, we pick the following four major vulnerability categories: Command Injection (CMDI), Cross-Site Scripting (XSS), Lightweight Directory Access Protocol (LDAP) Injection, and SQL Injection (SQLI). Moreover, we compare our results on scanner effectiveness from the OWASP benchmark with the existing results from the Web Application Vulnerability Security Evaluation Project (WAVSEP) benchmark, another popular benchmark used to evaluate scanner effectiveness. We are the first in the literature to make this comparison between the two benchmarks. The results mainly show that:
    - Scanners perform differently in different vulnerability categories; that is, no scanner can serve as an all-rounder in scanning web vulnerabilities.
    - The benchmarks also demonstrate different capabilities in reflecting the effectiveness of scanners in different vulnerability categories. It is recommended to combine the results from different benchmarks to determine the effectiveness of a scanner.
    - Regarding scanner effectiveness, OWASP ZAP performs the best in CMDI, SQLI, and XSS; Arachni performs the best in LDAP.
    - Regarding benchmark capability, the OWASP benchmark outperforms the WAVSEP benchmark in all the examined categories.
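    Per-category scanner scores in comparisons like these are typically read off a benchmark scorecard as true-positive rate minus false-positive rate (the Youden index), which the OWASP benchmark uses as its score. A minimal sketch of that scoring, assuming results have already been tallied into per-category TP/FN/FP/TN counts (the counts below are illustrative, not measured values from the thesis):

```python
# Sketch of OWASP-Benchmark-style per-category scoring.
# Assumes scan results were already tallied into TP/FN/FP/TN counts.

def benchmark_score(tp, fn, fp, tn):
    """Youden index: true-positive rate minus false-positive rate."""
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return tpr - fpr

# Hypothetical tallies for one scanner across the four studied categories.
tallies = {
    "CMDI":  (100, 26, 10, 115),
    "XSS":   (200, 46,  0, 209),
    "LDAPI": ( 20,  7,  3,  29),
    "SQLI":  (250, 22, 30, 202),
}

for category, (tp, fn, fp, tn) in tallies.items():
    print(f"{category}: score = {benchmark_score(tp, fn, fp, tn):.2f}")
```

    A score of 1.0 means perfect detection with no false alarms; 0.0 means the scanner does no better than flagging everything (or nothing).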

    Evaluation of web vulnerability scanners based on OWASP benchmark

    The widespread adoption of web vulnerability scanners and their differences in effectiveness make it necessary to benchmark these scanners. Moreover, the literature lacks a comparison of scanner-effectiveness results across different benchmarks. In this paper, we first compare the performance of several carefully chosen open-source web vulnerability scanners by running them against the OWASP benchmark, which is developed by the Open Web Application Security Project (OWASP), a well-known non-profit web security organization. Furthermore, we compare our results from the OWASP benchmark with the existing results from the Web Application Vulnerability Security Evaluation Project (WAVSEP) benchmark, another popular benchmark used to evaluate scanner effectiveness. We are the first in the literature to make a comparison between these two benchmarks. Our evaluation results allow us to make some valuable recommendations for the practice of benchmarking web scanners.

    A comparative study on the variants of R metric for network robustness

    Robust network infrastructures are essential for delivering vital services in our daily lives. With cyber-attacks on these networks widespread, measuring their robustness has become an important issue. In recent years, many robustness metrics have been proposed for this purpose. Among them, a metric called 'R' has received wide attention, and several variants have been proposed, including a metric called Communication Robustness (CR) and the betweenness and closeness variants of both R and CR. However, no evaluation of the correlations among these variants has been made to shed light on their necessity and effectiveness. Addressing this research gap, this paper makes the following contributions. First, we measure the correlations between R and CR to verify that CR is worth keeping for its unique perspective, even though it correlates closely with R. Second, we measure the correlations between R and its betweenness/closeness variants to show that they are quite different, and the same is observed for CR and its betweenness/closeness variants. Finally, we propose a new robustness metric called Simplified Communication Robustness (SCR), which simplifies the calculation of CR while behaving almost the same as CR.
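    For concreteness, the R metric referenced here is commonly defined in the network-robustness literature as R = (1/N) Σ_{Q=1}^{N} s(Q), where s(Q) is the fraction of nodes remaining in the largest connected component after Q nodes have been removed under some attack strategy. A minimal pure-Python sketch, assuming a recomputed highest-degree attack (the example graph is made up for illustration):

```python
# Sketch of the R robustness metric: R = (1/N) * sum_{Q=1..N} s(Q),
# where s(Q) is the fraction of the original N nodes that sit in the
# largest connected component after Q removals. Nodes are removed by
# recomputed highest degree; graph and attack order are assumptions.

def largest_component_fraction(adj, n_total):
    """Fraction of the original N nodes in the largest component of adj."""
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], 0
        seen.add(start)
        while stack:
            u = stack.pop()
            comp += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, comp)
    return best / n_total

def remove_node(adj, u):
    """Delete node u and all its incident edges from the adjacency map."""
    for v in adj.pop(u):
        adj[v].discard(u)

def r_metric(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    n = len(adj)
    total = 0.0
    for _ in range(n):
        # Targeted attack: remove the current highest-degree node.
        target = max(adj, key=lambda u: len(adj[u]))
        remove_node(adj, target)
        total += largest_component_fraction(adj, n) if adj else 0.0
    return total / n

# Example: a star on nodes 0-3 plus a pendant edge 3-4.
edges = [(0, 1), (0, 2), (0, 3), (3, 4)]
print(f"R = {r_metric(edges):.3f}")  # → R = 0.200
```

    Larger R means the largest component decays more gracefully under attack; the normalization by N keeps the value comparable across networks of different sizes.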